Common Criteria

The Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 3.1.[1]

Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements, vendors can then implement and/or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous and standard manner.


Key concepts

Common Criteria evaluations are performed on computer security products and systems.

The evaluation serves to validate claims made about the target. To be of practical use, the evaluation must verify the target's security features. This is done through the following:

Target of Evaluation (TOE): the product or system that is the subject of the evaluation.

Protection Profile (PP): a document, typically created by a user or user community, that identifies security requirements for a class of security devices relevant to that user for a particular purpose.

Security Target (ST): the document that identifies the security properties of the target of evaluation; an ST may claim conformance with one or more PPs.

Security Functional Requirements (SFRs): specifications of individual security functions that may be provided by a product, drawn from a standard catalogue.

The evaluation process also tries to establish the level of confidence that may be placed in the product's security features through quality assurance processes:

Security Assurance Requirements (SARs): descriptions of the measures taken during development and evaluation of the product to assure compliance with the claimed security functionality.

Evaluation Assurance Level (EAL): the numerical rating (EAL1 through EAL7) describing the depth and rigour of an evaluation; a higher level reflects more evaluation effort, not necessarily a more secure product.
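As a purely illustrative aid, the relationships among these documents can be sketched as a minimal data model. The Python structures, field names and example product below are assumptions invented for this sketch and are not defined by the standard; only the SFR identifiers follow the naming style of the CC functional catalogue.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal, hypothetical sketch of the CC document relationships:
# a Security Target (ST) describes one Target of Evaluation (TOE),
# may claim conformance to Protection Profiles (PPs), and selects
# Security Functional Requirements (SFRs) plus a target EAL.
# All class and field names here are invented for illustration.

@dataclass
class ProtectionProfile:
    name: str                 # e.g. a PP for a class of access-control products
    required_sfrs: List[str]  # SFR identifiers such as "FAU_GEN.1"

@dataclass
class SecurityTarget:
    toe: str                  # the product or system under evaluation
    claimed_sfrs: List[str]   # SFRs the vendor claims to implement
    assurance_level: int      # EAL 1..7
    conformant_pps: List[ProtectionProfile] = field(default_factory=list)

    def satisfies(self, pp: ProtectionProfile) -> bool:
        """True if every SFR required by the PP is claimed by this ST."""
        return all(sfr in self.claimed_sfrs for sfr in pp.required_sfrs)

# Example: an ST for a hypothetical product claiming conformance to a PP.
pp = ProtectionProfile("Example access-control PP", ["FAU_GEN.1", "FIA_UID.1"])
st = SecurityTarget(toe="ExampleOS 1.0",
                    claimed_sfrs=["FAU_GEN.1", "FIA_UID.1", "FMT_SMR.1"],
                    assurance_level=4,
                    conformant_pps=[pp])
print(st.satisfies(pp))  # True
```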

So far, most PPs and most evaluated STs/certified products have been for IT components (e.g., firewalls, operating systems, smart cards). Common Criteria certification is sometimes specified for IT procurement. Other standards covering, for example, interoperability, system management and user training supplement CC and other product standards. Examples include ISO/IEC 17799 (or, more properly, BS 7799-1, which is now ISO/IEC 27002) and the German IT-Grundschutzhandbuch.

Details of cryptographic implementation within the TOE are outside the scope of the CC. Instead, national standards such as FIPS 140-2 give the specifications for cryptographic modules, and various standards specify the cryptographic algorithms in use.

History

CC originated out of three standards:

ITSEC: the European standard, developed in the early 1990s by France, Germany, the Netherlands and the UK as a unification of earlier national evaluation schemes.

CTCPEC: the Canadian standard, which followed from the US DoD standard and was used jointly by evaluators from both the US and Canada.

TCSEC: the United States Department of Defense standard, known as the Orange Book and part of the Rainbow Series.

CC was produced by unifying these pre-existing standards, predominantly so that companies selling computer products for the government market (mainly for Defence or Intelligence use) would only need to have them evaluated against one set of standards. The CC was developed by the governments of Canada, France, Germany, the Netherlands, the UK, and the U.S.

Testing organizations

All testing laboratories must comply with ISO/IEC 17025, and certification bodies will normally be approved against either ISO/IEC Guide 65 or BS EN 45011.

Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority, for example the National Information Assurance Partnership (NIAP), which operates the Common Criteria Evaluation and Validation Scheme (CCEVS) in the United States.

Characteristics of these organizations were examined and presented at ICCC 10.[2]

Mutual recognition arrangement

As well as the Common Criteria standard, there is also a sub-treaty level Common Criteria MRA (Mutual Recognition Arrangement), whereby each party thereto recognizes evaluations against the Common Criteria standard done by other parties. Originally signed in 1998 by Canada, France, Germany, the United Kingdom and the United States, Australia and New Zealand joined in 1999, followed by Finland, Greece, Israel, Italy, the Netherlands, Norway and Spain in 2000. The Arrangement has since been renamed the Common Criteria Recognition Arrangement (CCRA) and membership continues to expand. Within the CCRA only evaluations up to EAL 4 are mutually recognized (including augmentation with flaw remediation). The European countries within the former ITSEC agreement typically recognize higher EALs as well. Evaluations at EAL5 and above tend to involve the security requirements of the host nation's government.

List of Abbreviations

CC: Common Criteria
CCRA: Common Criteria Recognition Arrangement
EAL: Evaluation Assurance Level
PP: Protection Profile
SAR: Security Assurance Requirement
SFR: Security Functional Requirement
ST: Security Target
TOE: Target of Evaluation

Issues

Requirements

Common Criteria is very generic; it does not directly provide a list of product security requirements or features for specific (classes of) products. This follows the approach taken by ITSEC, but it has been a source of debate for those used to the more prescriptive approach of other earlier standards such as TCSEC and FIPS 140-2.

Value of certification

If a product is Common Criteria certified, it does not necessarily mean it is completely secure. For example, various Microsoft Windows versions, including Windows Server 2003 and Windows XP, have been certified at EAL4+, but Microsoft still publishes regular security patches for vulnerabilities in these Windows systems. This is possible because the process of obtaining a Common Criteria certification allows a vendor to restrict the analysis to certain security features and to make certain assumptions about the operating environment and the strength of threats, if any, faced by the product in that environment. In this case, the assumptions include A.PEER:

Any other systems with which the TOE communicates are assumed to be under the same management control and operate under the same security policy constraints. The TOE is applicable to networked or distributed environments only if the entire network operates under the same constraints and resides within a single management domain. There are no security requirements that address the need to trust external systems or the communications links to such systems.

as contained in the Controlled Access Protection Profile (CAPP) to which their STs refer. Based on this and other assumptions, which are not realistic for the common use of general-purpose operating systems, the claimed security functions of the Windows products are evaluated. Thus they should be considered secure only in the assumed circumstances, also known as the evaluated configuration, specified by Microsoft.
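For illustration only, a deployment can be thought of as either satisfying or violating such environment assumptions. The following Python sketch is a hypothetical check in the spirit of A.PEER; its names and structure are invented for the example and are not derived from CAPP or any CC-defined interface.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: does an operational environment satisfy an
# A.PEER-style assumption, i.e. every connected system is under the same
# management control and the same security policy constraints?
# Names and structure are invented for illustration.

@dataclass
class ConnectedSystem:
    management_domain: str
    security_policy: str

def a_peer_holds(peers: List[ConnectedSystem]) -> bool:
    """True if all connected systems share one management domain and policy."""
    return (len({p.management_domain for p in peers}) <= 1
            and len({p.security_policy for p in peers}) <= 1)

# An Internet-facing deployment violates the assumption, so claims made
# for the evaluated configuration no longer apply to it.
peers = [ConnectedSystem("corp", "corp-policy"),
         ConnectedSystem("unknown", "unknown-policy")]
print(a_peer_holds(peers))  # False -> outside the evaluated configuration
```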

Whether you run Microsoft Windows in the precise evaluated configuration or not, you should apply Microsoft's security patches for the vulnerabilities in Windows as they continue to appear. If any of these security vulnerabilities are exploitable in the product's evaluated configuration, the product's Common Criteria certification should be voluntarily withdrawn by the vendor. Alternatively, the vendor should re-evaluate the product to include application of patches to fix the security vulnerabilities within the evaluated configuration. Failure by the vendor to take either of these steps would result in involuntary withdrawal of the product's certification by the certification body of the country in which the product was evaluated.

The certified Microsoft Windows versions remain at EAL4+ without including the application of any Microsoft security vulnerability patches in their evaluated configuration. This shows both the limitation and strength of an evaluated configuration.

Criticisms

In August 2007, Government Computer News (GCN) columnist William Jackson critically examined Common Criteria methodology and its US implementation by the Common Criteria Evaluation and Validation Scheme (CCEVS).[3] In the column, executives from the security industry, researchers, and representatives from the National Information Assurance Partnership (NIAP) were interviewed. Objections outlined in the article include:

In a 2006 research paper, computer specialist David A. Wheeler suggested that the Common Criteria process discriminates against Free and Open Source Software (FOSS)-centric organizations and development models.[4] Common Criteria assurance requirements tend to be inspired by the traditional waterfall software development methodology. In contrast, much FOSS software is produced using modern agile paradigms. Although some have argued that the two paradigms do not align well,[5] others have attempted to reconcile them.[6]

Alternative approaches

Throughout its lifetime, CC has not been universally adopted even by its creator nations; in particular, cryptographic approvals have been handled separately, for example by the Canadian/US implementation of FIPS 140 and by the CESG Assisted Products Scheme (CAPS)[7] in the UK.

The UK has also produced a number of alternative schemes when the timescales, costs and overheads of mutual recognition have been found to impede the operation of the market:

In early 2011, NSA/CSS published a paper by Chris Salter, which proposed a Protection Profile-oriented approach to evaluation. In this approach, communities of interest form around technology types, which in turn develop protection profiles that define the evaluation methodology for the technology type.[9] The objective is a more robust evaluation. There is some concern that this may have a negative impact on mutual recognition.[10]

See also

References

External links